
%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2021/09.06.19.40
%2 sid.inpe.br/sibgrapi/2021/09.06.19.40.09
%@doi 10.1109/SIBGRAPI54419.2021.00034
%T Iterative Pseudo-Labeling with Deep Feature Annotation and Confidence-Based Sampling
%D 2021
%A Benato, Barbara Caroline,
%A Telea, Alexandru Cristian,
%A Falcão, Alexandre Xavier,
%@affiliation University of Campinas 
%@affiliation Utrecht University 
%@affiliation University of Campinas
%E Paiva, Afonso,
%E Menotti, David,
%E Baranoski, Gladimir V. G.,
%E Proença, Hugo Pedro,
%E Junior, Antonio Lopes Apolinario,
%E Papa, João Paulo,
%E Pagliosa, Paulo,
%E dos Santos, Thiago Oliveira,
%E e Sá, Asla Medeiros,
%E da Silveira, Thiago Lopes Trugillo,
%E Brazil, Emilio Vital,
%E Ponti, Moacir A.,
%E Fernandes, Leandro A. F.,
%E Avila, Sandra,
%B Conference on Graphics, Patterns and Images, 34 (SIBGRAPI)
%C Gramado, RS, Brazil (virtual)
%8 18-22 Oct. 2021
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K semi-supervised learning, pseudo-labels, optimum path forest, data annotation.
%X Training deep neural networks is challenging when large annotated datasets are unavailable. Extensive manual annotation of data samples is time-consuming, expensive, and error-prone, notably when it must be done by experts. To address this issue, increasing attention has been devoted to techniques that propagate uncertain labels (also called pseudo-labels) to large amounts of unsupervised samples and use them to train the model. However, these techniques still need hundreds of supervised samples per class in the training set, plus a validation set with extra supervised samples to tune the model. We improve a recent iterative pseudo-labeling technique, Deep Feature Annotation (DeepFA), by selecting the most confident unsupervised samples to iteratively train a deep neural network. Our confidence-based sampling strategy relies on only dozens of annotated training samples per class and no validation set, considerably reducing user effort in data annotation. We first ascertain the best configuration for the baseline, a self-trained deep neural network, and then evaluate our confidence-based DeepFA for different confidence thresholds. Experiments on six datasets show that DeepFA already outperforms the self-trained baseline, while confidence-based DeepFA considerably outperforms both the original DeepFA and the baseline.
%@language en
%3 2021_sibgrapi_Benato-2.pdf
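
The abstract describes an iterative loop: propagate pseudo-labels to the unsupervised samples (via a semi-supervised optimum-path-forest classifier on deep features in the paper), keep only the pseudo-labeled samples whose confidence exceeds a threshold, and retrain the network on the supervised plus selected samples. The sketch below only illustrates that confidence-based selection loop; it is not the authors' implementation. As stand-ins, scikit-learn's LogisticRegression and its predict_proba output replace both the deep network and the optimum-path-forest labeler, and the 0.8 threshold is purely illustrative.

# Minimal sketch of confidence-based pseudo-label selection (assumed stand-ins,
# not the DeepFA code): LogisticRegression replaces the DNN and the OPFSemi
# labeler; predict_proba replaces the per-sample label confidence.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6, random_state=0)
sup = rng.choice(len(X), size=60, replace=False)          # only dozens of labels in total
unsup = np.setdiff1d(np.arange(len(X)), sup)

X_sup, y_sup = X[sup], y[sup]
X_unsup = X[unsup]

threshold = 0.8                                           # illustrative confidence threshold
model = LogisticRegression(max_iter=1000).fit(X_sup, y_sup)

for _ in range(5):                                        # a few pseudo-labeling iterations
    proba = model.predict_proba(X_unsup)
    pseudo = proba.argmax(axis=1)                         # pseudo-label for each unsupervised sample
    conf = proba.max(axis=1)                              # confidence of that pseudo-label
    keep = conf >= threshold                              # confidence-based sampling
    X_train = np.concatenate([X_sup, X_unsup[keep]])
    y_train = np.concatenate([y_sup, pseudo[keep]])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

In the paper, the confidence comes from the optimum-path-forest label propagation in the network's deep feature space rather than from a linear classifier, and the retrained model is the deep network itself; the structure of the loop is the same.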

